
A The Contract Bridge Game

Neural Information Processing Systems

The game of Contract Bridge is played with a standard 52-card deck (4 suits: spades, hearts, diamonds, and clubs, with 13 cards in each suit) and 4 players (North, East, South, West). North-South and East-West are two competing teams. Each player is dealt 13 cards. There are two phases during the game, namely bidding and playing. After the game, scoring is done based on the tricks won in the playing phase and whether they match the contract made in the bidding phase. An example of contract bridge bidding and playing is shown in Figure 1.
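The deal described above can be sketched in a few lines; this is an illustrative snippet (seat names and card labels are conventional, not taken from the paper), showing a 52-card deck split into four 13-card hands:

```python
import random

SUITS = "SHDC"           # spades, hearts, diamonds, clubs
RANKS = "AKQJT98765432"  # the 13 ranks within each suit

def deal():
    """Shuffle a 52-card deck and deal 13 cards to each of the four seats."""
    deck = [rank + suit for suit in SUITS for rank in RANKS]
    random.shuffle(deck)
    seats = ["North", "East", "South", "West"]
    return {seat: sorted(deck[i * 13:(i + 1) * 13])
            for i, seat in enumerate(seats)}

hands = deal()
```

North-South and East-West then form the two partnerships over these hands for the bidding and playing phases.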



e64f346817ce0c93d7166546ac8ce683-AuthorFeedback.pdf

Neural Information Processing Systems

We thank reviewers (R1, R2, R3, R5) for their insightful comments. We thank R5 for pointing out that the "decomposition challenges" in IIG are critical for equilibrium construction; therefore, our paper could have stronger implications than we expected. We disagree with R2: the tabular form of JPS indeed has theoretical guarantees, as appreciated by other reviewers. Full game AI is future work. R5 makes a great point that similarity exists between our policy-change density (Eqn.



Set-Based Retrograde Analysis: Precomputing the Solution to 24-card Bridge Double Dummy Deals

arXiv.org Artificial Intelligence

Retrograde analysis is used in game-playing programs to solve states at the end of a game, working backwards toward the start of the game. The algorithm iterates through and computes the perfect-play value for as many states as resources allow. We introduce setrograde analysis, which achieves the same results by operating on sets of states that have the same game value. The algorithm is demonstrated by computing exact solutions for Bridge double dummy card-play. For deals with 24 cards remaining to be played (10^27 states, which can be reduced to 10^15 states using preexisting techniques), we strongly solve all deals. The setrograde algorithm performs a factor of 10^3 fewer search operations than a standard retrograde algorithm, producing a database with a factor of 10^4 fewer entries. For applicable domains, this allows retrograde searching to reach unprecedented search depths.
1 Introduction
Some of the early high-performance game-playing programs relied on retrograde analysis and endgame databases for strong play. The most notable example is Checkers, where 39 trillion endgame positions, all those with 10 or fewer pieces, were used as part of the CHINOOK program (Schaeffer et al. 1992), and for solving Checkers (Schaeffer et al. 2007). Endgame databases are also used widely in Chess programs (Chess 2024), as well as in many other games (e.g., for solving Awari (Romein and Bal 2003)). Endgame databases are most effective in games where there are far fewer positions at the end of the game than elsewhere. As a result, they have not been applied in games that do not have this property. For instance, Sturtevant (2003) noted that in 3-player Chinese Checkers a winning arrangement of a single player's pieces has approximately 10^23 possible permutations of the other players' pieces, making it infeasible to store all the variations of even a single winning configuration.
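The backward-iteration idea behind retrograde analysis can be illustrated on a toy game (this is a sketch of the general technique, not the paper's Bridge solver): a subtraction game where a player may take 1, 2, or 3 tokens and whoever cannot move loses, solved from the terminal state outwards.

```python
def retrograde_solve(max_tokens, moves=(1, 2, 3)):
    """Return value[n] = True iff the player to move wins with n tokens left.

    Works backwards from the terminal state, the way an endgame database
    is filled: each state's value follows from already-solved successors.
    """
    value = {0: False}  # terminal state: no move available, so the mover loses
    for n in range(1, max_tokens + 1):
        # a state is winning if some move reaches a state known to be losing
        value[n] = any(not value[n - m] for m in moves if m <= n)
    return value

table = retrograde_solve(12)
# multiples of 4 are losses for the player to move
```

Setrograde analysis, as described above, applies the same backward sweep but to sets of states sharing a game value rather than to individual states.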
While in Chinese Checkers each player has a unique endgame configuration (the other side's piece locations are irrelevant), in Go the locations of both sides' pieces in a terminal state are important. Hence these games require significantly different analysis (Berlekamp and Wolfe 1994). In a 4-player trick-based card game such as Bridge, the last two tricks have C(52,2) · C(50,2) · C(48,2) · C(46,2) ≈ 1.9 × 10^12 possible states. However, there are only 16 ways for each deal to play out, meaning each deal is trivial to solve, but storing all states (as done in Checkers) is difficult.
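Reading the garbled expression above as the product of binomial coefficients for giving each of the four players 2 of the remaining cards drawn from the full deck, the count can be checked directly:

```python
from math import comb

# ways to assign 2 cards to each of 4 players, drawing from a 52-card deck
states = comb(52, 2) * comb(50, 2) * comb(48, 2) * comb(46, 2)
print(f"{states:.1e}")  # about 1.9e+12
```

The 16 play-outs per deal follow similarly: in the first trick the leader and each of the three followers choose between 2 cards (2^4 = 16), after which the last trick is forced.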


A Simple, Solid, and Reproducible Baseline for Bridge Bidding AI

arXiv.org Artificial Intelligence

Contract bridge, a cooperative game characterized by imperfect information and multi-agent dynamics, poses significant challenges and serves as a critical benchmark in artificial intelligence (AI) research. Success in this domain requires agents to cooperate effectively with their partners. This study demonstrates that an appropriate combination of existing methods can perform surprisingly well in bridge bidding against WBridge5, a leading benchmark bridge bidding program and a multiple-time World Computer-Bridge Championship winner. Our approach is notably simple, yet it outperforms the current state-of-the-art methodologies in this field. Furthermore, we have made our code and models publicly available as open-source software. This initiative provides a strong starting foundation for future bridge AI research, facilitating the development and verification of new strategies and advancements in the field.


BridgeHand2Vec Bridge Hand Representation

arXiv.org Artificial Intelligence

Contract bridge is a game characterized by incomplete information, posing an exciting challenge for artificial intelligence methods. This paper proposes the BridgeHand2Vec approach, which leverages a neural network to embed a bridge player's hand (consisting of 13 cards) into a vector space. The resulting representation reflects the strength of the hand in the game and enables interpretable distances to be determined between different hands. This representation is derived by training a neural network to estimate the number of tricks that a pair of players can take. In the remainder of this paper, we analyze the properties of the resulting vector space and provide examples of its application in reinforcement learning and opening bid classification. Although this was not our main goal, the neural network used for the vectorization achieves SOTA results on the DDBP2 problem (estimating the number of tricks for two given hands).


For Ukraine, the fight is often a game of bridges

The Japan Times

KHERSON REGION, Ukraine – The pontoon bridge had been in place for barely a day. The Ukrainian army rushed to move troops and equipment across. Then the soldiers watched on a drone video feed as the Russians blew up their bridge, yet again. "Yes, they hit the bridge," the drone pilot said matter-of-factly, peering at images beamed in from a safe distance, a mile or so away.


AI Wins Paris Bridge Competition: Future Tense for Human Beings?

#artificialintelligence

The race between technology and human beings has been a recurring theme in science fiction for more than a century. With human civilization witnessing phenomenal and unforeseen growth in high tech over the last few decades, experiments pitting technology against human agency have become sharper and more frequent. There is no prize for guessing that much of this is due to the exponential growth of AI. Recently, an artificial intelligence machine beat human players to win the Paris Bridge Competition, and the future for humans in competitive gaming now looks uncertain. A very recent addition to this stream of experimental exercises was found in Paris.